A Bi-channel Aided Stitching of Atomic Force Microscopy Images
Zhao, Huanhuan, Millan-Solsona, Ruben, Checa, Marti, Brown, Spenser R., Morrell-Falvey, Jennifer L., Collins, Liam, Biswas, Arpan
Microscopy is an essential tool in scientific research, enabling the visualization of structures at micro- and nanoscale resolutions. However, microscopy often encounters limitations in field-of-view (FOV), restricting the amount of sample that can be imaged in a single capture. To overcome this limitation, image stitching techniques have been developed to seamlessly merge multiple overlapping images into a single, high-resolution composite. Images collected from a microscope must be optimally stitched before accurate physical information can be extracted in post-analysis. However, existing stitching tools either struggle to merge microscopy images that are feature-sparse or cannot handle all the transformations between images. To address these issues, we propose a bi-channel aided, feature-based image stitching method and demonstrate its use on AFM-generated biofilm images. The topographical channel of AFM data captures the morphological details of the sample, and a stitched topographical image is what researchers ultimately need. We utilize the amplitude channel of AFM data to maximize the number of matching features and to estimate the positions of the original topographical images, and we show that the proposed bi-channel aided stitching method outperforms the traditional stitching approach. Furthermore, we found that differentiating the topographical images along the x-axis provides feature information similar to that of the amplitude channel, which generalizes our approach to cases where amplitude images are not available. Here we demonstrate the application on AFM, but similar approaches could be employed for optical microscopy with brightfield and fluorescence channels. We believe this workflow will help experimentalists avoid erroneous analysis and discovery caused by incorrect stitching.
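The x-derivative idea in this abstract can be sketched in a few lines of numpy. This is a minimal, hypothetical illustration, not the authors' implementation: it estimates the translation between two overlapping tiles by phase correlation on their x-derivative images, which remain feature-rich even when the raw topography is smooth; the recovered offset would then be applied to the original topographical tiles.

```python
import numpy as np

def x_derivative(img):
    """Circular difference along the x-axis: a feature-rich stand-in
    for the AFM amplitude channel when it is unavailable."""
    return img - np.roll(img, 1, axis=1)

def estimate_shift(ref, mov):
    """Estimate the integer (dy, dx) translation of `mov` relative to
    `ref` by phase correlation on the derivative images."""
    F_ref = np.fft.fft2(x_derivative(ref))
    F_mov = np.fft.fft2(x_derivative(mov))
    cross = F_mov * np.conj(F_ref)
    # normalise to phase only, then the inverse FFT peaks at the shift
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak indices into the signed shift range
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Once per-tile offsets are estimated on the derivative (or amplitude) channel, the same transforms position the topography tiles in the final mosaic.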
3D Face Reconstruction From Radar Images
Braeutigam, Valentin, Wirth, Vanessa, Ullmann, Ingrid, Schüßler, Christian, Vossiek, Martin, Berking, Matthias, Egger, Bernhard
The 3D reconstruction of faces has gained wide attention in computer vision and is used in many fields of application, for example animation, virtual reality, and even forensics. This work is motivated by monitoring patients in sleep laboratories. Due to their unique characteristics, sensors from the radar domain have advantages compared to optical sensors, namely penetration of electrically non-conductive materials and independence of lighting. These advantages of radar signals unlock new applications and require adaptation of 3D reconstruction frameworks. We propose a novel model-based method for 3D reconstruction from radar images. We generate a dataset of synthetic radar images with a physics-based but non-differentiable radar renderer. This dataset is used to train a CNN-based encoder to estimate the parameters of a 3D morphable face model. While the encoder alone already yields strong reconstructions of synthetic data, we extend our reconstruction in an analysis-by-synthesis fashion to a model-based autoencoder. This is enabled by learning the rendering process in the decoder, which acts as an object-specific differentiable radar renderer. The combination of both network parts is then trained to minimize both the loss on the parameters and the loss on the resulting reconstructed radar image. This has the additional benefit that, at test time, the parameters can be further optimized by fine-tuning the autoencoder on the image loss in an unsupervised manner. We evaluate our framework on generated synthetic face images as well as on real radar images with 3D ground truth of four individuals.
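The two-term training objective described in the abstract can be sketched as follows. The mean-squared-error form and the weights are assumptions for illustration, not the paper's exact losses; at test time only the image term would remain for unsupervised fine-tuning.

```python
import numpy as np

def combined_loss(theta_pred, theta_true, img_pred, img_true,
                  w_param=1.0, w_img=1.0):
    """Sketch of the autoencoder objective: a loss on the morphable-model
    parameters plus a loss on the radar image re-rendered by the learned
    differentiable decoder. Weights and MSE form are illustrative."""
    param_loss = np.mean((theta_pred - theta_true) ** 2)
    image_loss = np.mean((img_pred - img_true) ** 2)
    return w_param * param_loss + w_img * image_loss
```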
OAH-Net: A Deep Neural Network for Hologram Reconstruction of Off-axis Digital Holographic Microscope
Liu, Wei, Delikoyun, Kerem, Chen, Qianyu, Yildiz, Alperen, Myo, Si Ko, Kuan, Win Sen, Soong, John Tshon Yit, Cove, Matthew Edward, Hayden, Oliver, Lee, Hweekuan
Digital holographic microscopy (DHM) is emerging as an innovative imaging modality in computational microscopy. It provides high-resolution, quantitative, and three-dimensional information about samples without labelling. These unique features make DHM a promising technique for imaging living cells, as it captures intracellular structures while preserving cells in their natural state, enabling more precise analysis [1-4]. DHM records the interference pattern between the object and reference beams, which is then reconstructed algorithmically to retrieve the object wave in terms of phase and amplitude [5-7]. Holography can be classified into two main types based on beam alignment: inline holography and off-axis holography [8, 9]. In inline holography, the reference beam is parallel to the object beam. Although the setup is relatively simple, hologram reconstruction requires multiple exposures of the same sample at varying sample-to-sensor distances.
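Off-axis holography, by contrast, permits single-shot reconstruction because the tilted reference beam separates the diffraction orders in the Fourier domain. A minimal numpy sketch of this standard sideband-filtering step (the carrier position and filter radius are assumed known from the optical setup; this is not the paper's OAH-Net, which learns this reconstruction):

```python
import numpy as np

def reconstruct_off_axis(hologram, carrier, radius):
    """Isolate one diffraction order of an off-axis hologram in the
    Fourier domain, recentre it, and inverse-transform to recover the
    complex object wave as amplitude and phase. `carrier` is the
    (row, col) position of the sideband in the shifted spectrum."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    H, W = hologram.shape
    ky, kx = np.ogrid[:H, :W]
    mask = (ky - carrier[0]) ** 2 + (kx - carrier[1]) ** 2 <= radius ** 2
    side = np.where(mask, F, 0)                      # keep one sideband
    side = np.roll(side, (H // 2 - carrier[0], W // 2 - carrier[1]),
                   axis=(0, 1))                      # remove the carrier
    field = np.fft.ifft2(np.fft.ifftshift(side))
    return np.abs(field), np.angle(field)
```

For example, a hologram formed by a flat object wave and a tilted plane reference yields back the object amplitude from a single exposure.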
Hiding Local Manipulations on SAR Images: a Counter-Forensic Attack
Mandelli, Sara, Cannas, Edoardo Daniele, Bestagini, Paolo, Tebaldini, Stefano, Tubaro, Stefano
The vast accessibility of Synthetic Aperture Radar (SAR) images through online portals has propelled research across various fields. This widespread use and easy availability have unfortunately made SAR data susceptible to malicious alterations, such as local editing applied to the images to insert or conceal sensitive targets. This vulnerability is further exacerbated by the fact that most SAR products, despite their originally complex nature, are released as amplitude-only information, allowing even inexperienced attackers to easily alter the pixel content. To counter malicious manipulations, the forensic community has in recent years begun to dig into the SAR manipulation issue, proposing detectors that effectively localize tampering traces in amplitude images. Nonetheless, in this paper we demonstrate that an expert practitioner can exploit the complex nature of SAR data to obscure any signs of manipulation within a locally altered amplitude image. We refer to this approach as a counter-forensic attack. To conceal the manipulation traces, the attacker can simulate a re-acquisition of the manipulated scene by the SAR system that initially generated the pristine image. In doing so, the attacker can obscure any evidence of manipulation, making it appear as if the image was legitimately produced by the system. We assess the effectiveness of the proposed counter-forensic approach across diverse scenarios, examining various manipulation operations. The obtained results indicate that our attack successfully eliminates traces of manipulation, deceiving even the most advanced forensic detectors.
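The re-acquisition idea can be caricatured in a few lines of numpy. This toy, with a Hamming window standing in for the real system's spectral response and an externally supplied phase, only illustrates why pushing the whole edited image through one consistent acquisition model tends to homogenize local editing traces; it is not the paper's actual SAR simulation pipeline.

```python
import numpy as np

def simulate_reacquisition(edited_amplitude, phase, window=np.hamming):
    """Toy counter-forensic step: rebuild a complex SAR image from the
    locally edited amplitude and a phase estimate, move back to the
    spectral (raw-data-like) domain, re-apply a hypothetical system
    spectral weighting, and refocus to a new amplitude image."""
    img = edited_amplitude * np.exp(1j * phase)      # complex SAR image
    spec = np.fft.fft2(img)
    taper = np.outer(window(img.shape[0]), window(img.shape[1]))
    spec *= np.fft.ifftshift(taper)                  # bandwidth shaping
    return np.abs(np.fft.ifft2(spec))                # re-focused amplitude
```

Because the spectral weighting acts globally, the edited region acquires the same system-consistent correlation structure as the rest of the image.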
Domain Generalisation via Domain Adaptation: An Adversarial Fourier Amplitude Approach
Kim, Minyoung, Li, Da, Hospedales, Timothy
We tackle the domain generalisation (DG) problem by posing it as a domain adaptation (DA) task: we adversarially synthesise the worst-case target domain and adapt a model to that worst-case domain, thereby improving the model's robustness. To synthesise data that is challenging yet semantics-preserving, we generate Fourier amplitude images and combine them with source-domain phase images, exploiting the widely believed conjecture from signal processing that the amplitude spectrum mainly determines image style, while the phase mainly captures image semantics. To synthesise a worst-case domain for adaptation, we train the classifier and the amplitude generator adversarially. Specifically, we exploit the maximum classifier discrepancy (MCD) principle from DA, which relates target-domain performance to the discrepancy of classifiers in the model hypothesis space. By Bayesian hypothesis modeling, we express the model hypothesis space effectively as a posterior distribution over classifiers given the source domains, making adversarial MCD minimisation feasible. On the DomainBed benchmark, including the large-scale DomainNet dataset, the proposed approach yields significantly improved domain generalisation performance over the state of the art.
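The amplitude/phase recombination at the heart of this approach is a standard Fourier-domain operation; a minimal numpy sketch follows (the adversarial amplitude generator itself is omitted, and here the amplitude is simply taken from a second image):

```python
import numpy as np

def amplitude_phase_swap(style_img, content_img):
    """Combine the Fourier amplitude of `style_img` with the phase of
    `content_img`: per the conjecture cited above, the result keeps the
    semantics (phase) of the content image with the style (amplitude)
    of the other."""
    amp = np.abs(np.fft.fft2(style_img))
    pha = np.angle(np.fft.fft2(content_img))
    return np.real(np.fft.ifft2(amp * np.exp(1j * pha)))
```

In the paper's setting, `amp` would come from the trained generator rather than from a real image, so the synthesised domain can be pushed towards the classifier's worst case.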